

safety and security


Trump signs new executive orders intended to make flying cars a reality, slash flight times

FOX News

An aviation company is turning heads with an electric vertical take-off and landing vehicle. President Donald Trump signed three new executive orders on Friday aimed at accelerating American drone innovation and supersonic air travel, while also restoring security to American airspace. The three orders will be critical to American safety and security, White House officials involved in drafting the orders indicated, particularly in light of major worldwide events coming to the United States in the next few years, such as the World Cup and the Olympics. In addition to bolstering safety and security, the new orders will also spur greater innovation in the aerospace and drone sectors, something White House officials said has been stifled in recent years by burdensome regulations. "Flying cars are not just for the Jetsons," said Michael Kratsios, a lead tech policy adviser at the White House.


aiXamine: Simplified LLM Safety and Security

Deniz, Fatih, Popovic, Dorde, Boshmaf, Yazan, Jeong, Euisuh, Ahmad, Minhaj, Chawla, Sanjay, Khalil, Issa

arXiv.org Artificial Intelligence

Evaluating Large Language Models (LLMs) for safety and security remains a complex task, often requiring users to navigate a fragmented landscape of ad hoc benchmarks, datasets, metrics, and reporting formats. To address this challenge, we present aiXamine, a comprehensive black-box evaluation platform for LLM safety and security. aiXamine integrates over 40 tests (i.e., benchmarks) organized into eight key services targeting specific dimensions of safety and security: adversarial robustness, code security, fairness and bias, hallucination, model and data privacy, out-of-distribution (OOD) robustness, over-refusal, and safety alignment. The platform aggregates the evaluation results into a single report per model, providing a detailed breakdown of model performance, test examples, and rich visualizations. We used aiXamine to assess over 50 publicly available and proprietary LLMs, conducting over 2K examinations. Our findings reveal notable vulnerabilities in leading models, including susceptibility to adversarial attacks in OpenAI's GPT-4o, biased outputs in xAI's Grok-3, and privacy weaknesses in Google's Gemini 2.0. Additionally, we observe that open-source models can match or exceed proprietary models in specific services such as safety alignment, fairness and bias, and OOD robustness. Finally, we identify trade-offs between distillation strategies, model size, training methods, and architectural choices.
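The aggregation step the abstract describes, rolling per-test results up into per-service scores for a model's report, can be sketched as follows. This is a hypothetical illustration in the spirit of aiXamine's report structure, not its actual API; the service names are taken from the abstract, while the test names, scores, and the plain averaging rule are illustrative assumptions.

```python
from collections import defaultdict
from statistics import mean

# Illustrative (service, test, score) rows for one model; the test names
# and scores below are made up for the sketch.
results = [
    ("safety_alignment", "benchmark_a", 0.91),
    ("safety_alignment", "benchmark_b", 0.87),
    ("fairness_bias", "benchmark_c", 0.78),
    ("ood_robustness", "benchmark_d", 0.83),
]

def service_report(rows):
    """Group (service, test, score) rows and average scores per service."""
    by_service = defaultdict(list)
    for service, test, score in rows:
        by_service[service].append((test, score))
    return {
        service: {"tests": dict(tests), "score": mean(s for _, s in tests)}
        for service, tests in by_service.items()
    }

report = service_report(results)
print(report["safety_alignment"]["score"])  # mean of that service's tests
```

A real platform would likely weight tests differently or report distributions rather than a single mean; the grouping shape is the point here.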


Achieving the Safety and Security of the End-to-End AV Pipeline

Curran, Noah T., Cho, Minkyoung, Feng, Ryan, Liu, Liangkai, Tang, Brian Jay, MohajerAnsari, Pedram, Domeke, Alkim, Pesé, Mert D., Shin, Kang G.

arXiv.org Artificial Intelligence

In the current landscape of autonomous vehicle (AV) safety and security research, multiple isolated problems are being tackled by the community at large. Due to the lack of common evaluation criteria, several important research questions are at odds with one another. For instance, while much research has been conducted on physical attacks deceiving AV perception systems, investigation of working defenses and of the downstream effects on safe vehicle control is often inadequate. This paper provides a thorough description of the current state of AV safety and security research. We provide individual sections for the primary research questions that concern this research area, including AV surveillance, sensor system reliability, security of the AV stack, algorithmic robustness, and safe environment interaction. We wrap up the paper with a discussion of the issues that concern the interactions of these separate problems. At the conclusion of each section, we propose future research questions that still lack conclusive answers. This position article will serve as an entry point for novice and veteran researchers seeking to partake in this research domain.


AI Risk Management Should Incorporate Both Safety and Security

Qi, Xiangyu, Huang, Yangsibo, Zeng, Yi, Debenedetti, Edoardo, Geiping, Jonas, He, Luxi, Huang, Kaixuan, Madhushani, Udari, Sehwag, Vikash, Shi, Weijia, Wei, Boyi, Xie, Tinghao, Chen, Danqi, Chen, Pin-Yu, Ding, Jeffrey, Jia, Ruoxi, Ma, Jiaqi, Narayanan, Arvind, Su, Weijie J, Wang, Mengdi, Xiao, Chaowei, Li, Bo, Song, Dawn, Henderson, Peter, Mittal, Prateek

arXiv.org Artificial Intelligence

The exposure of security vulnerabilities in safety-aligned language models, e.g., susceptibility to adversarial attacks, has shed light on the intricate interplay between AI safety and AI security. Although the two disciplines now come together under the overarching goal of AI risk management, they have historically evolved separately, giving rise to differing perspectives. Therefore, in this paper, we advocate that stakeholders in AI risk management should be aware of the nuances, synergies, and interplay between safety and security, and unambiguously take into account the perspectives of both disciplines in order to devise the most effective and holistic risk mitigation approaches. Unfortunately, this vision is often obfuscated, as the definitions of the basic concepts of "safety" and "security" themselves are often inconsistent and lack consensus across communities. With AI risk management being increasingly cross-disciplinary, this issue is particularly salient. In light of this conceptual challenge, we introduce a unified reference framework to clarify the differences and interplay between AI safety and AI security, aiming to facilitate a shared understanding and effective collaboration across communities.


SISSA: Real-time Monitoring of Hardware Functional Safety and Cybersecurity with In-vehicle SOME/IP Ethernet Traffic

Liu, Qi, Li, Xingyu, Sun, Ke, Li, Yufeng, Liu, Yanchen

arXiv.org Artificial Intelligence

Scalable service-Oriented Middleware over IP (SOME/IP) is an Ethernet communication standard protocol in the Automotive Open System Architecture (AUTOSAR), promoting ECU-to-ECU communication over the IP stack. However, SOME/IP lacks a robust security architecture, making it susceptible to potential attacks. Moreover, random hardware failures of ECUs can disrupt SOME/IP communication. In this paper, we propose SISSA, a SOME/IP communication traffic-based approach for modeling and analyzing in-vehicle functional safety and cybersecurity. Specifically, SISSA models hardware failures with the Weibull distribution and addresses five potential attacks on SOME/IP communication, including Distributed Denial-of-Service, Man-in-the-Middle, and abnormal communication processes, assuming a malicious user has access to the in-vehicle network. Subsequently, SISSA designs a series of deep learning models with various backbones to extract features from SOME/IP sessions among ECUs. We adopt residual self-attention to accelerate the models' convergence and enhance detection accuracy, determining whether an ECU is under attack, facing functional failure, or operating normally. Additionally, we have created and annotated a dataset encompassing various classes, including indicators of attack, functionality, and normalcy. This contribution is noteworthy given the scarcity of publicly accessible datasets with such characteristics. Extensive experimental results show the effectiveness and efficiency of SISSA.


Police using AI could lead to 'predictive' crime prevention 'slippery slope,' experts argue

FOX News

Recording Industry Association of America CEO Mitch Glazier says the Human Artistry Campaign aims to protect professional creators' rights to their performances, voices and likenesses after AI creates Drake and The Weeknd songs. A pilot program in the U.K. to enhance police capabilities via artificial intelligence has proven successful but could pave the way for a slide into a future of "predictive policing," experts told Fox News Digital. "Artificial intelligence is a tool, like a firearm is a tool, and it can be useful, it can be deadly," Christopher Alexander, CCO of Liberty Blockchain, told Fox News Digital. "In terms of the Holy Grail here, I really think it is the predictive analytics capability that if they get better at that, you have some very frightening capabilities." British police in different communities have experimented with an artificial intelligence (AI)-powered system to help catch drivers committing violations, such as using their phones while driving or driving without a seat belt.


Computer Vision is becoming an accelerator for Education

#artificialintelligence

With a focus on safety and the opportunity to greatly enhance operations and the quality of research and learning, educational institutions could see significant gains by implementing computer vision with real-time federated analytics. Computer vision is revolutionizing many industries but is still making inroads into education. That's not surprising, given the historically tight budgets for many educational institutions. As the technology advances and becomes more mainstream in the commercial world, colleges and universities are more likely to be the first adopters in the education realm. With a camera infrastructure already in place on most education campuses, along with adequate district, campus and departmental networks, much of the infrastructure needed for computer vision is already in place.


Security and Safety Aspects of AI in Industry Applications

Doran, Hans Dermot

arXiv.org Artificial Intelligence

In this relatively informal discussion paper, we summarise issues in the domains of safety and security in machine learning that will affect industry sectors in the next five to ten years. Various products using neural network classification, most often in vision-related applications but also in predictive maintenance, have been researched and applied in real-world applications in recent years. Nevertheless, reports of underlying problems in both safety- and security-related domains, for instance adversarial attacks, have unsettled early adopters and are threatening to hinder wider-scale adoption of this technology. The problem for real-world applicability lies in being able to assess the risk of applying these technologies. In this discussion paper we describe the process of arriving at a machine-learnt neural network classifier, pointing out safety and security vulnerabilities in that workflow and citing relevant research where appropriate.


Examining the Future of Video Analytics: How AI and Machine Learning Play a Key Role? - Digital Journal

#artificialintelligence

Video analytics has caught the attention of several businesses as technology progresses. What comes to mind when we think of video analytics? Public safety and security are two primary concerns for businesses, and video analytics software can help by monitoring video streams in real time. It offers robust and practical solutions by collecting data with consistency and accuracy.


Turing Institute panel discussion on interpretability, safety and security in AI

AIHub

A few months ago, the Alan Turing Institute played host to a conference on interpretability, safety, and security in AI. This public event brought together leading academics, industrialists, policy makers, and journalists to foster conversations with the wider public about the merits and risks of artificial intelligence technology. The talks are now all available to watch on the Institute's YouTube channel. As part of the conference, participants were treated to a lively panel debate, chaired by Hannah Fry, which saw Isaac Kohane, Aleksander Madry, Cynthia Rudin, and Manuela Veloso discuss a variety of topics. Amongst other things, they talked about breakthroughs in their respective fields, holding AI systems (and their creators) accountable, complicated decision making, interpretability, and misinformation.
